1.
Trends Cogn Sci; 24(12): 1028-1040, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33158755

ABSTRACT

Artificial intelligence research has seen enormous progress over the past few decades, but it predominantly relies on fixed datasets and stationary environments. Continual learning is an increasingly relevant area of study that asks how artificial systems might learn sequentially, as biological systems do, from a continuous stream of correlated data. In the present review, we relate continual learning to the learning dynamics of neural networks, highlighting its potential to considerably improve data efficiency. We further consider the many new biologically inspired approaches that have emerged in recent years, focusing on those that utilize regularization, modularity, memory, and meta-learning, and highlight some of the most promising and impactful directions.
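Of the families named above, the memory-based (replay) approach is perhaps the easiest to make concrete: keep a small buffer of past examples and mix them into every new batch, approximating interleaved training over the whole stream. Below is a minimal sketch, not a method from this review; `ReplayBuffer` and its interface are hypothetical names, and reservoir sampling is one common way to keep the buffer unbiased over a stream.

```python
# Hypothetical sketch of memory-based continual learning: a fixed-size
# buffer maintained by reservoir sampling over the incoming data stream.
import random

class ReplayBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.seen = 0  # total examples observed so far

    def add(self, example):
        """Reservoir sampling: every example seen so far is retained
        with equal probability capacity / seen."""
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, k):
        """Draw up to k stored examples to mix into the current batch."""
        return random.sample(self.memory, min(k, len(self.memory)))
```

Training on `current_batch + buffer.sample(k)` at each step is the basic mechanism replay methods use to reduce forgetting on correlated streams.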


Subjects
Artificial Intelligence; Neural Networks, Computer; Humans; Learning; Memory
2.
Nature; 557(7705): 429-433, 2018 May.
Article in English | MEDLINE | ID: mdl-29743670

ABSTRACT

Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go [1,2]. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning [3-5] failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex [6]. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space [7,8] and is critical for integrating self-motion (path integration) [6,7,9] and planning direct trajectories to goals (vector-based navigation) [7,10,11]. Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types [12]. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments, optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation [7,10,11], demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.
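The first training stage described above, path integration, reduces to a supervised sequence problem: the network sees velocities and must report where it is. The sketch below is a simplified stand-in, not the paper's architecture; the layer sizes, learning rate, and use of raw 2D coordinates as targets are all assumptions made for illustration.

```python
# Rough sketch of a path-integration training setup: a recurrent network
# maps a velocity stream to the integrated 2D position.
import torch
import torch.nn as nn

T, B = 50, 64  # timesteps per trajectory, batch size

class PathIntegrator(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.RNN(input_size=2, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 2)  # decode 2D position per timestep

    def forward(self, vel):
        h, _ = self.rnn(vel)   # hidden states for every timestep
        return self.readout(h)

model = PathIntegrator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    vel = 0.1 * torch.randn(B, T, 2)   # random 2D velocity stream
    pos = torch.cumsum(vel, dim=1)     # ground truth: integrated path
    loss = nn.functional.mse_loss(model(vel), pos)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper, grid-like firing patterns emerge in units of the trained network; nothing in the objective asks for them explicitly.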


Subjects
Biomimetics/methods; Machine Learning; Neural Networks, Computer; Spatial Navigation; Animals; Entorhinal Cortex/cytology; Entorhinal Cortex/physiology; Environment; Grid Cells/physiology; Humans
4.
Proc Natl Acad Sci U S A; 114(13): 3521-3526, 2017 Mar 28.
Article in English | MEDLINE | ID: mdl-28292907

ABSTRACT

The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now, neural networks have not been capable of this, and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate that our approach is scalable and effective by solving a set of classification tasks based on a handwritten digit dataset and by learning several Atari 2600 games sequentially.
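The "selective slowing" described above takes the form of a quadratic penalty that anchors each weight to its value after the old task, scaled by an estimate of how much that weight mattered (the paper uses the diagonal of the Fisher information). A minimal sketch follows; the stand-in network, the placeholder importance values, and the penalty strength `lam` are assumptions for illustration.

```python
# Sketch of an elastic-weight-consolidation-style penalty: learning is
# stiff where the importance estimate F_i is large, free where it is small.
import torch
import torch.nn as nn

def ewc_penalty(model, old_params, fisher_diag, lam=1000.0):
    """(lam / 2) * sum_i F_i * (theta_i - theta_old_i)^2."""
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (fisher_diag[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

# After training on task A, snapshot the weights and their importance.
model = nn.Linear(4, 2)  # stand-in for a real network
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher_diag = {n: torch.ones_like(p) for n, p in model.named_parameters()}
# (placeholder: the paper estimates importance from squared gradients of
#  the log-likelihood on task A, not constant ones)

# While training on task B, add the penalty to the new task's loss.
x, y = torch.randn(8, 4), torch.randn(8, 2)
total_loss = nn.functional.mse_loss(model(x), y) + ewc_penalty(model, old_params, fisher_diag)
total_loss.backward()
```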


Subjects
Neural Networks, Computer; Algorithms; Artificial Intelligence; Computer Simulation; Humans; Learning; Memory; Mental Recall
5.
Neural Netw; 24(2): 199-207, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21036537

ABSTRACT

Neurodynamical models of working memory (WM) should provide mechanisms for storing, maintaining, retrieving, and deleting information. Many models address only a subset of these aspects. Here we present a rather simple WM model in which all of these performance modes are trained into a recurrent neural network (RNN) of the echo state network (ESN) type. The model is demonstrated on a bracket-level parsing task with a stream of rich and noisy graphical script input. In terms of nonlinear dynamics, memory states correspond, intuitively, to attractors in an input-driven system. As a supplementary contribution, the article proposes a rigorous formal framework to describe such attractors, generalizing from the standard definition of attractors in autonomous (input-free) dynamical systems.
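For readers unfamiliar with the architecture, the ESN recipe is: a large random recurrent "reservoir" with fixed weights, driven by the input, where only a linear readout is trained, typically by ridge regression. The sketch below shows that recipe on a toy delayed-recall task; the sizes, scaling constants, and task are assumptions, and the paper's full WM model additionally trains dedicated memory units fed back into the reservoir, which this sketch omits.

```python
# Basic echo state network: fixed random reservoir, trained linear readout.
import numpy as np

rng = np.random.default_rng(0)
N, n_in = 200, 1

W_in = rng.uniform(-0.5, 0.5, (N, n_in))
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state property)

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(N)
    states = []
    for t in range(len(u)):
        x = np.tanh(W @ x + W_in @ u[t])
        states.append(x.copy())
    return np.array(states)

# Toy memory task: reproduce the input from 5 steps ago.
T = 1000
u = rng.uniform(-1.0, 1.0, (T, 1))
y = np.roll(u, 5, axis=0)
y[:5] = 0.0

# Ridge regression for the readout weights (the only trained part).
X = run_reservoir(u)
beta = 1e-4
W_out = np.linalg.solve(X.T @ X + beta * np.eye(N), X.T @ y)
pred = X @ W_out  # readout estimate of the delayed input
```

Stable memory in such a network corresponds to the input-driven attractors the article formalizes: states the reservoir settles into and holds under a given input regime.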


Subjects
Memory, Short-Term; Models, Neurological; Neural Networks, Computer; Memory, Short-Term/physiology; Neurons/physiology; Random Allocation